This paper is a technical overview of DeepMind and Google's recent work on reinforcement learning for controlling commercial cooling systems. Building on expertise that began with cooling Google's data centers more efficiently, we recently conducted live experiments on two real-world facilities in partnership with Trane Technologies, a building management system provider. These live experiments presented a variety of challenges in areas such as evaluation, learning from offline data, and constraint satisfaction. We describe these challenges in the hope that awareness of them will benefit future applied RL work. We also describe how we adapted our RL system to deal with these challenges, resulting in energy savings of approximately 9% and 13% at the two live experiment sites, respectively.
Reinforcement learning (RL) techniques have been developed to optimize industrial cooling systems, offering substantial energy savings compared to traditional heuristic policies. A major challenge in industrial control is learning behavior that is feasible in the real world given the constraints of the machinery. For example, some actions can only be executed once every few hours, while other actions can be taken much more frequently. Without extensive reward engineering and experimentation, an RL agent may not learn realistic operation of the machinery. To address this, we use hierarchical reinforcement learning with multiple agents that control subsets of actions according to their operation timescales. Our hierarchical approach achieves energy savings over existing baselines while maintaining constraints, such as operating chillers within safe bounds, in a simulated HVAC control environment.
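To make the timescale decomposition concrete, here is a minimal sketch of one way such a hierarchy could be wired up: a slow policy re-decides its action subset only every few steps (e.g., chiller staging), while a fast policy acts every step conditioned on the committed slow action. The class and parameter names are hypothetical and are not taken from the paper.

```python
import numpy as np

class TimescaleHierarchicalAgent:
    """Illustrative sketch: two policies acting on disjoint action subsets
    at different control periods (not the paper's actual implementation)."""

    def __init__(self, slow_policy, fast_policy, slow_period_steps):
        self.slow_policy = slow_policy        # e.g. chiller staging, allowed every few hours
        self.fast_policy = fast_policy        # e.g. setpoint adjustments, allowed every step
        self.slow_period = slow_period_steps  # env steps between slow decisions
        self._cached_slow_action = None

    def act(self, observation, step):
        # Re-decide the slow action only when its timescale permits; otherwise
        # hold the previously committed value. Feasibility is thus enforced by
        # construction rather than learned through reward shaping.
        if step % self.slow_period == 0 or self._cached_slow_action is None:
            self._cached_slow_action = self.slow_policy(observation)
        fast_action = self.fast_policy(observation, self._cached_slow_action)
        # The environment receives the concatenation of both action subsets.
        return np.concatenate([self._cached_slow_action, fast_action])

# Toy usage with placeholder policies over 3 slow and 2 fast action dimensions.
agent = TimescaleHierarchicalAgent(
    slow_policy=lambda obs: np.zeros(3),
    fast_policy=lambda obs, slow: np.ones(2),
    slow_period_steps=36,  # e.g. one slow decision every 3 hours at 5-minute steps
)
action = agent.act(observation=np.zeros(5), step=0)
```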
We present a hybrid industrial cooling system model that embeds analytical solutions within a multi-physics simulation. The model is designed for reinforcement learning (RL) applications and balances simplicity against simulation fidelity and interpretability. The model's fidelity is evaluated against real-world data from a large-scale cooling system. This is followed by a case study illustrating how the model can be used for RL research. To that end, we develop an industrial task suite that allows specifying different problem settings and levels of complexity, and use it to evaluate the performance of different RL algorithms.
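As a rough illustration of what a configurable task-suite interface might look like, the sketch below uses a toy first-order thermal response in place of the paper's hybrid analytical/multi-physics model; the configuration fields, class names, and dynamics are placeholders, not the paper's actual API.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class CoolingTaskConfig:
    # Hypothetical knobs for problem setting and complexity; names are illustrative.
    n_chillers: int = 2
    step_seconds: int = 300
    constraint_weight: float = 10.0

class ToyHybridCoolingEnv:
    """Gym-style sketch: a toy first-order response stands in for the
    hybrid analytical/multi-physics model described in the abstract."""

    def __init__(self, config: CoolingTaskConfig):
        self.cfg = config
        self.temp = 22.0  # supply water temperature in degrees C (toy state)

    def reset(self):
        self.temp = 22.0
        return np.array([self.temp])

    def step(self, chiller_loads):
        # In the real model an analytical heat-balance solution would be embedded
        # in a numerical simulation; here a toy relaxation toward ambient is used.
        cooling = float(np.sum(chiller_loads))
        self.temp += 0.1 * (30.0 - self.temp) - 0.5 * cooling
        energy = cooling * self.cfg.step_seconds / 3600.0
        violation = max(0.0, 18.0 - self.temp) + max(0.0, self.temp - 26.0)
        reward = -energy - self.cfg.constraint_weight * violation
        return np.array([self.temp]), reward, False, {"violation": violation}
```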
K-FAC is a successful tractable implementation of natural gradient for deep learning, which nevertheless suffers from the requirement to compute the inverses of the Kronecker factors (through an eigendecomposition). This can be very time-consuming (or even prohibitive) when these factors are large. In this paper, we show theoretically that, owing to the exponential-average construction paradigm typically used for the Kronecker factors, their eigenspectrum must decay. We show numerically that in practice this decay is very rapid, leading to the idea that substantial computation can be saved by focusing only on the first few eigenmodes when inverting the Kronecker factors. Randomized numerical linear algebra provides the necessary tools to do so. Numerical results show a reduction of approximately $2.5\times$ in per-epoch time and approximately $3.3\times$ in time to target accuracy. We compare our proposed accelerated K-FAC versions with a more computationally efficient NG implementation (SENG) and observe that we perform on par with it.
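The core idea, keeping only the leading eigenmodes of each Kronecker factor, can be sketched with a generic randomized range finder. The function below is an illustrative approximation of a damped factor inverse, not the paper's exact algorithm, and all names and parameters are made up.

```python
import numpy as np

def truncated_inverse_via_rnla(factor, rank, damping=1e-3, oversample=8, seed=0):
    """Sketch: approximate (factor + damping*I)^-1 using only the top eigenmodes,
    found with a randomized range finder (illustrative, not the paper's method)."""
    rng = np.random.default_rng(seed)
    n = factor.shape[0]
    # Randomized range finder: project the factor onto a small random subspace.
    omega = rng.standard_normal((n, rank + oversample))
    q, _ = np.linalg.qr(factor @ omega)
    # Solve a small (rank + oversample)-dimensional eigenproblem instead of n x n.
    small = q.T @ factor @ q
    evals, evecs = np.linalg.eigh(small)
    idx = np.argsort(evals)[::-1][:rank]
    evals, evecs = evals[idx], evecs[:, idx]
    u = q @ evecs  # approximate top eigenvectors of the full factor
    # Exact inverse on the captured modes, 1/damping on the discarded ones.
    inv_top = u @ np.diag(1.0 / (evals + damping) - 1.0 / damping) @ u.T
    return inv_top + np.eye(n) / damping
```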
In optimization for machine learning (ML), typical curvature-matrix (CM) estimates rely on an exponential average of local estimates (giving EA-CM algorithms). This approach has little principled justification but is very often used in practice. In this paper, we establish a connection between EA-CM algorithms and what we call a "wake of quadratically regularized models". The outlined connection allows us to understand, from an optimization perspective, what EA-CM algorithms are doing. Generalizing from the established connection, we propose a new family of algorithms, "KL-Divergence Wake-Regularized Models" (KLD-WRM). We give three different practical instantiations of KLD-WRM and show numerically that they outperform K-FAC on MNIST.
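For reference, the EA-CM update the abstract refers to is the standard exponential average of local curvature estimates. A minimal sketch is given below; the decay factor name rho and the toy Fisher-like estimate are illustrative only.

```python
import numpy as np

def ea_curvature_update(cm_estimate, local_estimate, rho=0.95):
    """One step of an exponential-average curvature-matrix (EA-CM) update:
    C_k = rho * C_{k-1} + (1 - rho) * C_local."""
    return rho * cm_estimate + (1.0 - rho) * local_estimate

# Toy usage: exponentially average per-batch Fisher-like outer products g g^T.
rng = np.random.default_rng(0)
cm = np.zeros((4, 4))
for _ in range(100):
    g = rng.standard_normal(4)  # stand-in for a per-batch gradient
    cm = ea_curvature_update(cm, np.outer(g, g))
```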
Data poisoning is an attack on machine learning models wherein the attacker adds examples to the training set to manipulate the behavior of the model at test time. This paper explores poisoning attacks on neural nets. The proposed attacks use "clean-labels"; they don't require the attacker to have any control over the labeling of training data. They are also targeted; they control the behavior of the classifier on a specific test instance without degrading overall classifier performance. For example, an attacker could add a seemingly innocuous image (that is properly labeled) to a training set for a face recognition engine, and control the identity of a chosen person at test time. Because the attacker does not need to control the labeling function, poisons could be entered into the training set simply by leaving them on the web and waiting for them to be scraped by a data collection bot. We present an optimization-based method for crafting poisons, and show that just one single poison image can control classifier behavior when transfer learning is used. For full end-to-end training, we present a "watermarking" strategy that makes poisoning reliable using multiple (≈ 50) poisoned training instances. We demonstrate our method by generating poisoned frog images from the CIFAR dataset and using them to manipulate image classifiers.
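A minimal sketch of the optimization-based ("feature collision") crafting idea follows: the poison is pushed close to the target in feature space while staying close to the base image in pixel space. Plain gradient descent stands in for the paper's forward-backward splitting procedure, and `feature_net` and all parameter values are placeholders.

```python
import torch

def craft_feature_collision_poison(feature_net, base_img, target_img,
                                    beta=0.1, lr=0.01, steps=200):
    """Sketch: find a poison image close to base_img in pixel space but close to
    target_img in feature space (illustrative; not the paper's exact procedure)."""
    poison = base_img.clone().requires_grad_(True)
    with torch.no_grad():
        target_feat = feature_net(target_img)
    opt = torch.optim.Adam([poison], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Feature-collision term plus a pixel-space proximity penalty.
        loss = (torch.norm(feature_net(poison) - target_feat) ** 2
                + beta * torch.norm(poison - base_img) ** 2)
        loss.backward()
        opt.step()
        with torch.no_grad():
            poison.clamp_(0.0, 1.0)  # keep the poison a valid image
    return poison.detach()
```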